Overview

Fetches the complete daily securities list, including price band information, for all listed securities from NSE Archives. Unlike fetch_incremental_price_bands.py, which reports only changes, this script provides the full universe of securities with their current circuit limits. Source: fetch_complete_price_bands.py
Phase: Phase 2 (Enrichment)
Output: complete_price_bands.json

Data Source

NSE Archives CSV Endpoint

GET https://nsearchives.nseindia.com/content/equities/sec_list_{date}.csv
date
string
required
Date in ddmmyyyy format (e.g., 03032024 for March 3, 2024)

Example URL

https://nsearchives.nseindia.com/content/equities/sec_list_03032024.csv
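The dated URL can be built by formatting a date into the pattern above. A minimal sketch (the variable names are illustrative, not taken from the script):

```python
from datetime import datetime

# Illustration: format a date into the sec_list URL pattern from this doc.
base_url = "https://nsearchives.nseindia.com/content/equities/sec_list_{date}.csv"

date_str = datetime(2024, 3, 3).strftime("%d%m%Y")  # -> "03032024"
url = base_url.format(date=date_str)
print(url)
```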

CSV Format

The NSE CSV contains comprehensive security information (example structure):
SYMBOL,NAME OF COMPANY,SERIES,DATE OF LISTING,PAID UP VALUE,MARKET LOT,ISIN NUMBER,FACE VALUE,PRICE BAND
RELIANCE,Reliance Industries Limited,EQ,29-NOV-1977,10,1,INE002A01018,10,20
TATASTEEL,Tata Steel Limited,EQ,18-NOV-1998,10,1,INE081A01012,10,20

Function Signature

def fetch_nse_security_list():
    """
    Fetches the complete daily security list with price bands from NSE Archives.
    Searches backwards up to 7 days to find the most recent file.
    Parses CSV using pandas and saves as JSON.
    """

Date Search Logic

Identical to the incremental script: it searches backwards from today, up to 7 days:
today = datetime.now()

for i in range(8):  # 0-7 days back
    check_date = today - timedelta(days=i)
    date_str = check_date.strftime("%d%m%Y")  # Format: ddmmyyyy
    url = base_url.format(date=date_str)

    response = requests.get(url, headers=headers, timeout=15)
    if response.status_code == 200:
        print(f"  Found data for {date_str}!")
        # Parse and break
        break

CSV Parsing

Uses pandas for robust CSV parsing:
csv_content = response.content.decode('utf-8')
df = pd.read_csv(io.StringIO(csv_content))

# Convert to list of dictionaries
raw_data = df.to_dict(orient='records')

# Clean the data keys and values
clean_data = []
for record in raw_data:
    cleaned_record = {}
    for k, v in record.items():
        key = k.strip() if isinstance(k, str) else k
        value = v.strip() if isinstance(v, str) else v
        cleaned_record[key] = value
    clean_data.append(cleaned_record)
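After cleaning, the records are written to the output file named in this doc. A minimal sketch of the serialization step, using a sample record in place of the full list:

```python
import json

# Sketch: serialize cleaned records to complete_price_bands.json
# (the output file named in this doc). The record here is a sample.
clean_data = [
    {"SYMBOL": "RELIANCE", "PRICE BAND": "20"},
]

with open("complete_price_bands.json", "w") as f:
    json.dump(clean_data, f, indent=2)
```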

Output Structure

SYMBOL
string
Stock trading symbol
NAME OF COMPANY
string
Full company name
SERIES
string
Series type (EQ, BE, etc.)
DATE OF LISTING
string
Original listing date
PAID UP VALUE
string
Paid-up share value
MARKET LOT
string
Minimum trading lot size
ISIN NUMBER
string
International Securities Identification Number
FACE VALUE
string
Face value per share
PRICE BAND
string
Current circuit limit percentage (e.g., “20”, “10”, “5”, “2”)

Example Output

[
  {
    "SYMBOL": "RELIANCE",
    "NAME OF COMPANY": "Reliance Industries Limited",
    "SERIES": "EQ",
    "DATE OF LISTING": "29-NOV-1977",
    "PAID UP VALUE": "10",
    "MARKET LOT": "1",
    "ISIN NUMBER": "INE002A01018",
    "FACE VALUE": "10",
    "PRICE BAND": "20"
  },
  {
    "SYMBOL": "TATASTEEL",
    "NAME OF COMPANY": "Tata Steel Limited",
    "SERIES": "EQ",
    "DATE OF LISTING": "18-NOV-1998",
    "PAID UP VALUE": "10",
    "MARKET LOT": "1",
    "ISIN NUMBER": "INE081A01012",
    "FACE VALUE": "10",
    "PRICE BAND": "20"
  }
]
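A downstream consumer would typically index this output by symbol. A sketch of how that lookup might be built (the inline records stand in for the loaded JSON; this is not code from the pipeline):

```python
# Sketch: turn the complete list into a symbol -> circuit-limit lookup,
# the way a consumer such as bulk_market_analyzer.py might use it.
records = [
    {"SYMBOL": "RELIANCE", "PRICE BAND": "20"},
    {"SYMBOL": "TATASTEEL", "PRICE BAND": "20"},
]

band_by_symbol = {r["SYMBOL"]: r["PRICE BAND"] for r in records}
print(band_by_symbol.get("RELIANCE"))  # "20"
```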

Dependencies

  • requests — HTTP client for downloading CSV
  • json — JSON serialization
  • datetime — Date calculations
  • pandas — CSV parsing and data manipulation
  • io — In-memory file handling for CSV parsing

HTTP Headers

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Accept": "*/*"
}

Configuration Differences

Compared to fetch_incremental_price_bands.py:
  • Timeout: 15 seconds (vs 10) due to larger file size
  • URL Pattern: sec_list_{date}.csv (vs eq_band_changes_{date}.csv)
  • Output File: complete_price_bands.json (vs incremental_price_bands.json)

Error Handling

  • 15-second timeout per HTTP request (larger file)
  • Gracefully handles 404 (file not found) for non-trading days
  • Continues searching backwards through date range
  • Reports parse errors but continues to next date
try:
    df = pd.read_csv(io.StringIO(csv_content))
    # Process data
except Exception as parse_error:
    print(f"  Error parsing CSV for {date_str}: {parse_error}")
    continue

Usage Example

python3 fetch_complete_price_bands.py
Expected Output:
Checking for security list on 03032024...
  Found data for 03032024!
Successfully saved 2845 securities to complete_price_bands.json
On Weekend:
Checking for security list on 02032024...
  No file found for 02032024 (404).
Checking for security list on 01032024...
  Found data for 01032024!
Successfully saved 2842 securities to complete_price_bands.json

Integration

This script is part of Phase 2 (Enrichment) in the EDL Pipeline. The output file is consumed by:
  • bulk_market_analyzer.py — Populates the “Circuit Limit” field for all stocks
  • Used as the master source of truth for current circuit limits
Run via master pipeline:
python3 run_full_pipeline.py

Data Volume

  • Typical record count: ~2,800-2,900 securities
  • File size: 2-3 MB (JSON)
  • Download time: 2-5 seconds

Use Cases

vs. Incremental Price Bands

Script                           | Purpose                | Record Count | Use Case
fetch_incremental_price_bands.py | Daily changes only     | 5-50         | Event markers for circuit revisions
fetch_complete_price_bands.py    | Full universe snapshot | ~2,850       | Master lookup for current limits
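The full snapshot supports universe-wide queries that the incremental file cannot answer, e.g. "which securities currently trade in a 5% band?" A sketch with toy data (symbols are placeholders):

```python
# Sketch: filter the complete snapshot by band; the incremental file
# only lists the day's changes, so it cannot answer this query.
securities = [
    {"SYMBOL": "AAA", "PRICE BAND": "5"},
    {"SYMBOL": "BBB", "PRICE BAND": "20"},
    {"SYMBOL": "CCC", "PRICE BAND": "5"},
]

five_pct = [s["SYMBOL"] for s in securities if s["PRICE BAND"] == "5"]
print(five_pct)  # ['AAA', 'CCC']
```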

Price Band Interpretation

Circuit Limit Bands:
  • 2% — Extreme surveillance (very high risk)
  • 5% — High surveillance
  • 10% — Moderate surveillance
  • 20% — Normal (standard circuit limit for most stocks)
Stocks in ASM/GSM surveillance lists typically have tighter bands (2-10%).
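The interpretation above could be encoded as a small helper. The tier labels are this doc's descriptions, not official NSE terminology, and the helper itself is hypothetical:

```python
# Hypothetical helper mapping the PRICE BAND string to the surveillance
# tier described above; label names are assumptions, not NSE terminology.
def band_tier(price_band: str) -> str:
    tiers = {
        "2": "extreme surveillance",
        "5": "high surveillance",
        "10": "moderate surveillance",
        "20": "normal",
    }
    return tiers.get(price_band.strip(), "unknown")

print(band_tier("2"))   # "extreme surveillance"
print(band_tier("20"))  # "normal"
```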

Data Availability Notes

  • NSE typically publishes this file on trading days only
  • File is usually available after market close (3:30 PM IST)
  • Weekends and holidays will have no file (404 response)
  • The 7-day search window ensures data freshness even after long weekends
  • Unlike incremental changes, this file is published daily and contains the complete list